Conversation

@MoonBoi9001
Member

Summary

Two optimizations for POI query handling in the index-node resolver:

  1. Make resolve_proof_of_indexing async - Removes the synchronous block_on call that was blocking tokio worker threads while waiting for database queries. POI queries now properly yield to the async runtime.

  2. Parallelize publicProofsOfIndexing requests - Changes the sequential for-loop to future::join_all so that batched POI requests execute concurrently.

Changes

  • server/index-node/src/resolver.rs
    • resolve_proof_of_indexing: fn → async fn, block_on(fut) → fut.await
    • resolve_public_proofs_of_indexing: sequential loop → future::join_all
    • Added future import from graph::futures03

@MoonBoi9001 MoonBoi9001 requested a review from lutter January 26, 2026 17:50
@MoonBoi9001 MoonBoi9001 force-pushed the feat/async-poi-resolver branch from 8045272 to b6cd094 Compare January 26, 2026 19:57
MoonBoi9001 and others added 2 commits January 27, 2026 23:27
Remove synchronous `block_on` call in `resolve_proof_of_indexing` which
was blocking tokio worker threads while waiting for database queries.

Before this change, each POI query blocked an entire tokio worker thread.

After this change, POI queries properly yield to the async runtime while
waiting for database I/O, allowing the connection pool to be fully utilized.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Change the publicProofsOfIndexing resolver to process all POI requests
in parallel using `future::join_all` instead of sequentially in a for loop.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@MoonBoi9001 MoonBoi9001 force-pushed the feat/async-poi-resolver branch from b6cd094 to 888a12e Compare January 28, 2026 03:27
@lutter
Collaborator

lutter commented Jan 28, 2026

The first commit is great and something that absolutely needs to happen. For the second commit, I am not quite clear on what the motivation is - are PoI requests slow? The worry I have is that when graph-node is under a lot of load, parallelizing these requests makes the situation worse. The code limits the number of requests to 10, which is good, but I'd still like to understand what the motivation for this change is.

@MoonBoi9001
Member Author

The motivation is POI verification tooling for dispute investigation.

When investigating POI discrepancies (e.g., for arbitration disputes), we need to compute POIs across large block ranges to identify where divergence occurred.

With sequential processing, throughput is bottlenecked at ~2k POIs/second even though the database connection pool has capacity. The publicProofsOfIndexing endpoint accepts batch requests (up to 10), but processing them sequentially means we're not utilizing available parallelism. Checking every block sequentially on a network with 100m blocks takes 13+ hours at ~2k blocks/second.

Pre-fetch all block hashes in a single batch query before parallel POI
processing, reducing database round-trips from 10+ to 1-2 per batch.

- Add block_hashes_by_block_numbers batch method to ChainStore trait
- Add get_public_proof_of_indexing_with_block_hash to StatusStore trait
- Modify resolver to group requests by network and batch-fetch hashes
- Pass pre-fetched hashes to avoid redundant lookups during parallel POI computation

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@MoonBoi9001 MoonBoi9001 force-pushed the feat/async-poi-resolver branch from 8b2a3fc to 4f89b3f Compare January 29, 2026 02:10
@MoonBoi9001
Member Author

The worry I have is that when graph-node is under a lot of load, parallelizing these requests makes the situation worse.

The third commit addresses this: all block hashes are now pre-fetched in a single batch query before parallel processing, reducing DB round-trips.
